259 research outputs found

    Spatial aggregation of local likelihood estimates with applications to classification

    This paper presents a new method for spatially adaptive local (constant) likelihood estimation which applies to a broad class of nonparametric models, including the Gaussian, Poisson and binary response models. The main idea of the method is, given a sequence of local likelihood estimates ("weak" estimates), to construct a new aggregated estimate whose pointwise risk is of the order of the smallest risk among all "weak" estimates. We also propose a new approach to selecting the parameters of the procedure by prescribing the behavior of the resulting estimate in the simple parametric situation. We establish a number of important theoretical results concerning the optimality of the aggregated estimate. In particular, our "oracle" result states that its risk is, up to a logarithmic multiplier, equal to the smallest risk for the given family of estimates. The performance of the procedure is illustrated by application to the classification problem. A numerical study demonstrates its reasonable performance in simulated and real-life examples.
    Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/009053607000000271.
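    A minimal numerical sketch may clarify the stagewise aggregation step for the Gaussian case. Everything below (the kernel, the linear mixing rule, the critical values `z_crit`) is an illustrative placeholder rather than the paper's exact procedure; in the paper the critical values are calibrated via the propagation approach in the parametric situation.

```python
import numpy as np

def ssa_estimate(x0, X, Y, bandwidths, z_crit, sigma2=1.0):
    """Sketch of stagewise aggregation of Gaussian local constant
    ("weak") likelihood estimates at a point x0. `bandwidths` is an
    increasing sequence; `z_crit` holds one critical value per
    aggregation step (len(bandwidths) - 1 of them)."""
    theta_agg = None
    for k, h in enumerate(bandwidths):
        w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
        n_k = w.sum()                            # effective sample size
        theta_k = (w * Y).sum() / n_k            # "weak" local estimate
        if theta_agg is None:
            theta_agg = theta_k                  # start from the most local scale
            continue
        # likelihood-ratio-type distance between the previous aggregate
        # and the current weak estimate, on the scale of its noise level
        T = n_k * (theta_agg - theta_k) ** 2 / (2.0 * sigma2)
        gamma = max(0.0, 1.0 - T / z_crit[k - 1])  # mixing weight in [0, 1]
        theta_agg = gamma * theta_k + (1.0 - gamma) * theta_agg
    return theta_agg
```

    If the data look homogeneous near x0, every statistic T stays small, gamma stays near 1 and the aggregate propagates to the smoothest estimate; a structural change makes T large and freezes the aggregate at a more local scale.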

    Statistical inference for time-inhomogeneous volatility models

    This paper offers a new approach for estimating and forecasting the volatility of financial time series. No assumption is made about the parametric form of the processes; on the contrary, we only suppose that the volatility can be approximated by a constant over some interval. In such a framework, the main problem consists of filtering this interval of time homogeneity; the estimate of the volatility can then be obtained simply by local averaging. We construct a locally adaptive volatility estimate (LAVE) which can perform this task and investigate it both from the theoretical point of view and through Monte Carlo simulations. Finally, the LAVE procedure is applied to a data set of nine exchange rates and a comparison with a standard GARCH model is provided. Both models appear capable of explaining many of the features of the data; nevertheless, the new approach seems to be superior to the GARCH method as far as the out-of-sample results are concerned.
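    The core of the approach is easy to sketch. The toy code below grows the estimation window at the last observation and keeps it while a crude two-half homogeneity check passes; the actual LAVE procedure uses a more careful multiscale test with calibrated critical values, so treat the test statistic and `z_crit` as placeholders.

```python
import numpy as np

def lave_sketch(returns, candidate_lengths, z_crit=2.5):
    """Toy version of the interval-of-homogeneity idea behind LAVE:
    find the longest recent interval over which the squared-return
    level looks constant, then estimate sigma^2 by local averaging.
    `candidate_lengths` is an increasing list of window sizes."""
    r2 = np.asarray(returns, dtype=float) ** 2
    accepted = candidate_lengths[0]
    for m in candidate_lengths[1:]:
        window = r2[-m:]
        a, b = window[: m // 2], window[m // 2:]
        # crude change detection: compare the means of the two halves
        stat = abs(a.mean() - b.mean()) / (window.std() / np.sqrt(m) + 1e-12)
        if stat > z_crit:          # homogeneity rejected: stop growing
            break
        accepted = m               # interval of time homogeneity so far
    return r2[-accepted:].mean()   # volatility estimate by local averaging
```

    For a return series r, something like `sigma2_hat = lave_sketch(r, [20, 40, 80, 160, 320])` then yields the locally averaged estimate over the accepted interval.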

    Parameter tuning in pointwise adaptation using a propagation approach

    This paper discusses the problem of adaptive estimation of a univariate object, such as the value of a regression function at a given point or a linear functional in a linear inverse problem. We consider an adaptive procedure originating from Lepski [Theory Probab. Appl. 35 (1990) 454-466] that selects, in a data-driven way, one estimate out of a given class of estimates ordered by their variability. A serious problem with using this and similar procedures is the choice of tuning parameters such as thresholds. Numerical results show that the theoretically recommended proposals are too conservative and lead to a strong oversmoothing effect. A careful choice of the parameters of the procedure is extremely important for achieving reasonable estimation quality. The main contribution of this paper is a new approach to choosing the parameters of the procedure by prescribing the behavior of the resulting estimate in the simple parametric situation. We establish a non-asymptotic "oracle" bound, which shows that the estimation risk is, up to a logarithmic multiplier, equal to the risk of the "oracle" estimate that is optimally selected from the given family. A numerical study demonstrates good performance of the resulting procedure in a number of simulated examples.
    Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/08-AOS607.
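    To make the tuning problem concrete, here is a hedged sketch of a Lepski-type selection rule together with a propagation-style calibration loop. The names (`lepski_select`, `simulate_parametric`) and the acceptance rule are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def lepski_select(thetas, stds, z):
    """Select among estimates ordered from most to least variable:
    accept step k only while theta_k stays within z[j] * stds[j] of
    every more variable estimate theta_j already accepted."""
    k_sel = 0
    for k in range(1, len(thetas)):
        if all(abs(thetas[k] - thetas[j]) <= z[j] * stds[j] for j in range(k)):
            k_sel = k
        else:
            break
    return k_sel

def tune_by_propagation(simulate_parametric, z_grid, alpha=0.05, n_mc=1000):
    """Propagation idea: in a purely parametric (homogeneous) model the
    procedure should almost always run through to the least variable
    estimate. Pick the smallest threshold in the increasing grid `z_grid`
    for which early stopping occurs in at most an alpha-fraction of
    simulated runs. `simulate_parametric` is a user-supplied function
    returning one draw of (thetas, stds) under the parametric model."""
    for z in z_grid:
        early = 0
        for _ in range(n_mc):
            thetas, stds = simulate_parametric()
            if lepski_select(thetas, stds, [z] * len(thetas)) < len(thetas) - 1:
                early += 1
        if early / n_mc <= alpha:
            return z
    return z_grid[-1]   # fall back to the most conservative threshold
```

    This mirrors the paper's point that thresholds should be calibrated by the prescribed behavior in the parametric situation rather than by conservative theoretical bounds.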

    An adaptive multiclass nearest neighbor classifier

    We consider a problem of multiclass classification, where the training sample $S_n = \{(X_i, Y_i)\}_{i=1}^n$ is generated from the model $\mathbb{P}(Y = m \mid X = x) = \eta_m(x)$, $1 \leq m \leq M$, and $\eta_1(x), \dots, \eta_M(x)$ are unknown $\alpha$-Hölder continuous functions. Given a test point $X$, our goal is to predict its label. A widely used $\mathsf{k}$-nearest-neighbors classifier constructs estimates of $\eta_1(X), \dots, \eta_M(X)$ and uses a plug-in rule for the prediction. However, it requires a proper choice of the smoothing parameter $\mathsf{k}$, which may become tricky in some situations. In our solution, we fix several integers $n_1, \dots, n_K$, compute the corresponding $n_k$-nearest-neighbor estimates for each $m$ and each $n_k$ and apply an aggregation procedure. We study an algorithm which constructs a convex combination of these estimates such that the aggregated estimate behaves approximately as well as an oracle choice. We also provide a non-asymptotic analysis of the procedure, prove its adaptation to the unknown smoothness parameter $\alpha$ and to the margin, and establish rates of convergence under mild assumptions.
    Comment: Accepted in ESAIM: Probability & Statistics. The original publication is available at www.esaim-ps.org.
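    The following sketch shows the two stages at a single test point: plug-in k-NN estimates of the class probabilities for several values of k, followed by a convex aggregation. The stagewise weighting below is a deliberately crude placeholder for the paper's aggregation procedure.

```python
import numpy as np

def knn_etas(X_train, y_train, x, ks, n_classes):
    """n_k-NN plug-in estimates of eta_1(x), ..., eta_M(x), one row
    per value of k in `ks` (assumed increasing); labels are 0..M-1."""
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    return np.array([np.bincount(y_train[order[:k]], minlength=n_classes) / k
                     for k in ks])

def aggregate_and_predict(etas, ks, z=2.0):
    """Placeholder stagewise convex combination, from the least to the
    most smoothed estimate; NOT the paper's exact weighting scheme."""
    agg = etas[0]
    for eta_k, k in zip(etas[1:], ks[1:]):
        noise = np.sqrt(0.25 / k)   # crude noise level of a k-NN average
        gamma = np.clip(1.0 - np.abs(agg - eta_k).max() / (z * noise), 0.0, 1.0)
        agg = gamma * eta_k + (1.0 - gamma) * agg   # convex combination
    return int(np.argmax(agg))      # plug-in label prediction
```

    A call like `aggregate_and_predict(knn_etas(X, y, x, ks, M), ks)` then yields the predicted label without committing to a single choice of k.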

    Two convergence results for an alternation maximization procedure

    Andresen and Spokoiny's (2013) "critical dimension in semiparametric estimation" provides a technique for the finite-sample analysis of profile M-estimators. This paper uses very similar ideas to derive two convergence results for the alternating procedure used to approximate the maximizer of random functionals such as the realized log-likelihood in maximum likelihood estimation. We show that the sequence attains the same deviation properties as shown for the profile M-estimator in Andresen and Spokoiny (2013), i.e. a finite-sample Wilks and Fisher theorem. Further, under slightly stronger smoothness constraints on the random functional, we can show nearly linear convergence to the global maximizer if the starting point for the procedure is well chosen.
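    The procedure itself is simple to state; the substance of the paper lies in the finite-sample guarantees. A generic sketch, with `argmax_theta` and `argmax_eta` standing for user-supplied partial maximizers of the random functional L(theta, eta):

```python
import numpy as np

def alternate_maximize(argmax_theta, argmax_eta, theta, eta,
                       n_iter=100, tol=1e-9):
    """Alternation procedure for a functional L(theta, eta):
    argmax_theta(eta) maximizes over the target parameter with the
    nuisance fixed, argmax_eta(theta) does the converse. The paper's
    guarantees (finite-sample Wilks/Fisher, nearly linear convergence)
    require smoothness conditions and a well-chosen starting point."""
    for _ in range(n_iter):
        theta_new = argmax_theta(eta)
        eta_new = argmax_eta(theta_new)
        step = np.linalg.norm(theta_new - theta) + np.linalg.norm(eta_new - eta)
        theta, eta = theta_new, eta_new
        if step < tol:   # stop once the alternation has stabilized
            break
    return theta, eta
```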